Management/Promotion Packet
Written up properly here: The Debossification of Management
References
At Google, promotions seem to depend heavily on high-visibility impact. As a result, people scramble to work on visible projects in order to get promoted, which makes “important but not visible” teams hard to staff.
Requirement: well-defined competency matrix for all levels.
Quora:
Promotions at Google are modeled after the process used to grant tenure and promotions to university faculty. Engineers write a self-assessment of their own accomplishments, ask their peers and manager for written feedback, and then an independent committee of senior engineers reviews the written feedback, scores from the manager, code, design docs, etc. and makes a final decision. Although time-intensive, it's much more fair than a traditional system where the manager makes decisions unilaterally.
Problem statement
There are a few problems that promotion packets attempt to address:
- Expectation clarification and calibration: being a Staff Engineer in one part of the organization ought to mean the same thing as being a Staff Engineer in another, for fairness reasons.
- The best way to achieve this is to formalize the requirements and test them continuously. We need to ask ourselves explicitly: are the requirements unambiguous enough to make a call? Must every requirement always be met, or do we need to loosen some of them?
- Manager bias in promotion decisions: as managers we have a natural confirmation bias towards the people on our team: we want them to do well, and tend to see that belief confirmed.
- External validation of this progression splits responsibilities: the manager can focus on growth opportunities and supporting the candidate, while the judgment of success is externalized.
Process
When an IC or their manager wants to work toward a seniority promotion (same role, different level), they start preparing a promotion packet: a document explaining the rationale behind the promotion, combined with evidence that the person is ready.
The employee in question submits this document to their manager’s manager for review (who in turn can request reviews from other employees as well), as part of the end-of-year performance review cycle.
A decision about the promotion will be made based on this document as part of the review cycle. This decision can be:
- Accept: promotion will take effect by January 1 of the next year.
- Postpone: not ready yet, but close — to be revisited in 6 months (potential mid-year promotion), here are areas to work on.
- Reject: not ready yet, here are areas to work on.
Advantage of also having an official Postpone decision: helps in preparing the H2 budget (better visibility on mid-year comp updates).
Foundational assumption: we promote people when they already perform at the level (and have done so long enough to provide sufficient evidence). We don’t promote based on potential.
Applicability: these promotion packets apply to seniority promotions, not to promotions into a different role (e.g. from IC to manager), because it is hard to perform in a role without actually having it.
Alternatives considered
The approach described above is inspired by Google’s, but not the same; specifically:
- The content of the packet is modeled on what I understand packets look like at Google (although I could not find any examples or templates).
- I believe the act of writing down the requirements and collecting the evidence is the core of the value of this promotion process. In many cases simply performing this exercise will clarify if somebody is ready, or what still needs to happen to get promoted, even if nobody ever gets to review the packet.
- The archive of these packets will also be valuable over time to see how our standards evolve.
- If we want to go “radical transparency” on this, we may want to consider making (accepted) promotion packets public.
- At Google, employees submit themselves for promotion (no involvement of their manager required).
- This is a nice feature (avoids manager positive or negative bias), but I think not essential from day 1 at Mattermost.
- At Google, promotion packets are reviewed by a promotion committee of people in the same role, but +2 levels above and ideally in another part of the org (so they don’t know the candidate personally). This is not feasible at Mattermost’s scale.
- Compromise: have more senior managers do it, somewhat removed from the candidate in question.
- Alternative compromise: no formal approval, but promotion packets will be linked in the calibration sheet for anybody to review.
- At Google, responses from the committee are binding. That is: if they say you are not ready for a promotion, but will be if you do A, B, and C — then fulfilling those requirements guarantees the promotion (even when a different committee reviews it next time).
- This is a valuable feature, but impractical at Mattermost’s scale and business reality.
- At Google, there are all sorts of mechanisms to appeal decisions.
- This is impractical and too heavy-weight at Mattermost’s scale.
Open questions
- Peer reviews: there are two use cases for these:
- Collect general feedback (as we already collect now for EOY performance reviews)
- Collect evidence for some of the seniority checklist items — for these we’d need to either expand the peer review form ad hoc, or send out a separate survey to people who could provide this evidence.
How should these be handled?
Template (concept)
Name:
Current level:
Next level:
Level requirement diff
Based on the Level spreadsheet, explain the differences between the current level and current level + 1.
Note: this does not need to be done from scratch for every packet; we can template these, or have a place to copy & paste them from.
For instance for L4 (SDE II) -> L5 (Senior SDE):
- Sets and delivers architectural vision for high impact features and changes across the product stack and test automation infrastructure.
- Defines new feature assignments for themselves, usually without requiring help.
- Inspires, organizes and enables groups of open source community members to contribute to development campaigns in building significant new functionality.
- Recognized by colleagues and community as a technical authority, passively influencing discussions and behavior, and working in sync with PM, UX, and customer teams.
- Can independently make decisions affecting customer value/impact for complex topics within a product/feature-set.
- Leads cross-product, cross-feature, and cross-team discussions related to customer value/impact, and can bring the stakeholders to a decision point.
- Independently researches new technologies and paradigms to improve team efficiency.
- Frequently called upon to comment on customer and community discussions. Is very comfortable in customer and community discussions, aligns efforts, and develops superior solutions through discussion and analysis. Participates deeply in cross-team efforts. Begins to lead discussions on topics that reach outside of engineering.
Evidence
Demonstrate you meet at least 80% (?) of these expectations, with 2-3 examples each (with the eight expectations above, 80% would mean at least 7 of them).
1. Delivers architectural vision for high impact features
Project A: I led the project to do X. I gathered input on this project from A, B, and C, and created a doc as a result, found here.
2. Defines new feature assignments for themselves
Project A: I proceeded to plan out the project, created tickets in JIRA (link, link, link) and started implementing them. Here are links to some of the main PRs: link, link, link.
3. Inspires, organizes and enables groups of open source community members
Here is community campaign 1 that I ran, here are the results.
4. Recognized by colleagues and community as a technical authority
Quotes from peer reviews. Ideally showing “X helped me with Y when I was stuck.” “X sat with me and unblocked Y.” etc.
5. Independently makes decisions affecting customer value/impact
See this Mattermost discussion and these meeting notes where I demonstrate this.
6. Leads cross-product, cross-feature, and cross-team discussions
Project A
Project B
7. Independently researches new technologies and paradigms
Investigation 1
Investigation 2
8. Frequently called upon to comment on customer and community discussions
Examples: here and here and here.
Peer reviews
Purpose: detect blind spots, give general sense of peer feedback.
Reviewer A says:
Bla bla bla